
    Clinical Utility Gains from Incorporating Comorbidity and Geographic Location Information into Risk Estimation Equations for Atherosclerotic Cardiovascular Disease

    Objective: There are several efforts to re-learn the 2013 ACC/AHA pooled cohort equations (PCE) for patients with specific comorbidities and geographic locations. With over 363 customized risk models in the literature, we aim to evaluate such revised models to determine whether their performance improvements translate to gains in clinical utility. Methods: We re-train a baseline PCE using the ACC/AHA PCE variables and revise it to incorporate subject-level geographic location and comorbidity information. We apply fixed effects, random effects, and extreme gradient boosting models to handle the correlation and heterogeneity induced by locations. Models are trained using 2,464,522 claims records from Optum Clinformatics Data Mart and validated on a held-out set (N=1,056,224). We evaluate model performance overall and across subgroups defined by the presence or absence of chronic kidney disease (CKD) or rheumatoid arthritis (RA) and by geographic location. We evaluate expected net benefit using decision curve analysis and statistical properties using several discrimination and calibration metrics. Results: The baseline PCE is miscalibrated overall, in patients with CKD or RA, and in locations with small populations. Our revised models improved both overall (GND P-value=0.41) and subgroup calibration but enhanced net benefit only in the underrepresented subgroups. The gains are larger in the subgroups with comorbidities and are heterogeneous across geographic locations. Conclusions: Revising the PCE with comorbidity and location information significantly enhanced model calibration; however, such improvements do not necessarily translate to clinical gains. We therefore recommend that future work quantify the consequences of using risk calculators to guide clinical decisions.
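
    The decision curve analysis mentioned in the Methods reduces to a simple net-benefit computation, NB(t) = TP/N - FP/N * t/(1-t), compared between models across decision thresholds. The following is a minimal sketch, not the authors' code; the simulated events, risk scores, and threshold grid are assumptions for illustration.

        # Hypothetical sketch of decision curve analysis: net benefit of a risk
        # model at decision threshold t is NB(t) = TP/N - FP/N * t/(1-t).
        # Simulated outcomes, risk scores, and thresholds are illustrative only.
        import numpy as np

        def net_benefit(y_true, risk_scores, threshold):
            treat = risk_scores >= threshold          # patients flagged for treatment
            n = len(y_true)
            tp = np.sum(treat & (y_true == 1))        # true positives
            fp = np.sum(treat & (y_true == 0))        # false positives
            return tp / n - fp / n * threshold / (1 - threshold)

        rng = np.random.default_rng(0)
        y = rng.binomial(1, 0.1, size=10_000)         # simulated ASCVD events
        baseline = np.clip(0.20 * y + rng.normal(0.10, 0.05, y.size), 0, 1)
        revised = np.clip(0.25 * y + rng.normal(0.08, 0.05, y.size), 0, 1)

        for t in (0.05, 0.075, 0.10, 0.20):           # plausible statin-decision thresholds
            print(t, net_benefit(y, baseline, t), net_benefit(y, revised, t))

    A model adds clinical utility at a threshold when its net benefit exceeds both the alternative model's and the "treat all"/"treat none" strategies, which is the comparison underlying the subgroup results above.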

    Reconstruction and Analysis of Mitochondrial Morphology and Distribution in Neocortical Neurons using 3D Deep Learning

    Mitochondria play a crucial role in the functioning of neurons by synthesizing adenosine triphosphate (ATP), a compound used to deliver energy intracellularly, and by buffering intracellular Ca2+. Both are necessary for neuron-to-neuron signal transmission via synapses. Mitochondria are highly dynamic organelles, constantly changing through fusion and fission events, moving intracellularly, and often forming long filaments in order to adapt to a changing cell environment. The morphology and location of mitochondria can give insight into the amount of energy required for processes in different parts of the cell. Previous studies, which have focused on small fragments of the hippocampus, have identified the role of mitochondria in the creation of new synapses and in responding to hypoxic conditions. Differences have also been observed in the distribution and level of connectivity of mitochondria between dendrites, axons, and cell bodies. Although the presence of mitochondria in dendritic spines has been disputed, it has been observed in CA3. Studies of larger areas of various brain regions are necessary to thoroughly describe the distribution of neural mitochondria and to gain more insight into the functioning of this crucial organelle. Many studies thus far have observed single neurons in vivo or depended on manual reconstructions of electron microscopy (EM) images. In recent years, various semi-automatic and fully automatic approaches have been explored to identify mitochondria in cells; the deep learning approach to semantic segmentation of mitochondria in EM images developed by Dorkenwald et al. (2017) achieved the highest accuracy. We apply a similar deep learning approach to identify mitochondria in a 196 × 130 × 40 μm³ TEM dataset of mouse primary visual cortex (V1) collected by the Allen Institute as part of the IARPA MICrONS program. The dataset contains cell reconstructions and single-cell functional data, which can be used together with our mitochondrial profiles. We use a 3D neural network to classify each dataset voxel as either mitochondrion or background and reconstruct individual mitochondrion objects using connected components. Applying an efficient CPU implementation of the RSUNet, we achieve an F1 score of 0.867 with an inference speed of 3.5 CPU hours per gigavoxel. We reconstruct over 900,000 mitochondrion objects in the volume and assign each mitochondrion to a cell segmentation and a morphological cell part (axon, dendrite, soma). For cells with somas in the volume, we calculate each mitochondrion's distance from the soma center and compute the mitochondrial volume density distribution as a function of that distance. We examine variability in mitochondrion volume and length, as well as the proportion of axon and dendrite length containing mitochondria, across cell types. We observe filaments spanning the entire dataset (over 170 μm in length) in apical dendrites, exceeding the length of the longest filaments reported so far by more than a factor of three. Surprisingly, we also find small mitochondria in dendritic spines in V1.
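
    To make the reconstruction step concrete, here is a minimal sketch assuming a thresholded voxel probability map: individual mitochondrion objects are extracted with 3D connected components and summarized by volume and by distance from a soma center. The voxel size, threshold, and simulated data are assumptions, and this is not the authors' pipeline.

        # Minimal sketch (not the authors' pipeline) of the reconstruction step:
        # threshold a voxel-wise mitochondrion probability map, extract objects
        # with 3D connected components, and summarize per-object volume and
        # distance from an assumed soma center. All constants are illustrative.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)
        prob = rng.random((64, 64, 64)).astype(np.float32)  # stand-in network output

        mask = prob > 0.98                        # binarize the semantic segmentation
        labels, n_objects = ndimage.label(mask)   # default 6-connectivity in 3D

        voxel_volume_um3 = 0.004                  # assumed voxel volume in cubic microns
        counts = np.bincount(labels.ravel())[1:]  # voxels per object (drop background)
        volumes_um3 = counts * voxel_volume_um3

        # Distance of each object's centroid from an assumed soma center, in voxels.
        soma_center = np.array([32.0, 32.0, 32.0])
        centroids = np.array(ndimage.center_of_mass(mask, labels, np.arange(1, n_objects + 1)))
        dists = np.linalg.norm(centroids - soma_center, axis=1)
        print(n_objects, volumes_um3[:5], dists[:5])

    Binning object volumes by distance and dividing by the corresponding shell volume then yields the volume density distribution as a function of distance from the soma center, as described above.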

    A comparison of approaches to improve worst-case predictive model performance over patient subpopulations

    Predictive models for clinical outcomes that are accurate on average in a patient population may underperform drastically for some subpopulations, potentially introducing or reinforcing inequities in care access and quality. Model training approaches that aim to maximize worst-case model performance across subpopulations, such as distributionally robust optimization (DRO), attempt to address this problem without introducing additional harms. We conduct a large-scale empirical study of DRO and several variations of standard learning procedures to identify approaches for model development and selection that consistently improve disaggregated and worst-case performance over subpopulations, compared to standard approaches for learning predictive models from electronic health records data. In the course of our evaluation, we introduce an extension to DRO approaches that allows for specification of the metric used to assess worst-case performance. We conduct the analysis for models that predict in-hospital mortality, prolonged length of stay, and 30-day readmission for inpatient admissions, and that predict in-hospital mortality using intensive care data. We find that, with relatively few exceptions, no approach performs better, for each patient subpopulation examined, than standard learning procedures using the entire training dataset. These results imply that when it is of interest to improve model performance for patient subpopulations beyond what can be achieved with standard practices, it may be necessary to do so via data collection techniques that increase the effective sample size or reduce the level of noise in the prediction problem.
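
    As a concrete illustration of the DRO objective referenced above (in the standard group DRO form of Sagawa et al., 2020, not the metric-specific extension introduced in this study), a toy sketch: per-subpopulation losses are combined with adversarial group weights updated by exponentiated gradient ascent, so the worst-off group dominates the objective. The logistic model, step sizes, and simulated data are assumptions.

        # Illustrative toy sketch of group DRO (not the study's implementation):
        # minimize a weighted combination of per-subpopulation losses, with group
        # weights pushed toward the worst-off group via the exponentiated-gradient
        # update of Sagawa et al. (2020). Model, step sizes, and data are assumed.
        import numpy as np

        rng = np.random.default_rng(2)
        n, d, n_groups = 2000, 5, 3
        X = rng.normal(size=(n, d))
        g = rng.integers(0, n_groups, size=n)     # subpopulation labels
        y = (X @ rng.normal(size=d) + rng.normal(scale=1.0 + g, size=n) > 0).astype(float)

        def group_losses(w):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))    # logistic predictions
            ll = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
            return np.array([ll[g == k].mean() for k in range(n_groups)])

        w = np.zeros(d)
        q = np.ones(n_groups) / n_groups          # adversarial group weights
        lr, eta_g = 0.1, 0.01
        for _ in range(500):
            q *= np.exp(eta_g * group_losses(w))  # up-weight worse-off groups
            q /= q.sum()
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            sample_w = q[g] / np.bincount(g, minlength=n_groups)[g]
            w -= lr * (X * ((p - y) * sample_w)[:, None]).sum(axis=0)

        print("per-group losses:", np.round(group_losses(w), 3))

    The study's finding is that such worst-case objectives rarely beat standard training on the full dataset for any examined subpopulation, which is why the conclusion points toward data collection rather than objective design.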